Random Time with Differentiable Conditional Distribution Function
Authors
Abstract
Similar resources
Random time with differentiable conditional distribution function (Dec 2013)
In [24] a particular class of one-default market models was presented, where the default times were defined by stochastic differential equations. We learned from this study that random times τ in this class might have their conditional distribution function u ↦ Q[τ ≤ u | F_t] differentiable with respect to an F-adapted non-decreasing process A, and the derivatives were computed in terms of the stoc...
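To make the differentiability property concrete, here is a hedged LaTeX sketch of the kind of representation the abstract alludes to; the derivative-process notation α_t(v) is our own label, not taken from the paper.

% Sketch under our notation: tau is the random time, (F_t) the
% filtration, A the F-adapted non-decreasing reference process.
\[
  Q[\tau \le u \mid \mathcal{F}_t] \;=\; \int_0^u \alpha_t(v)\, \mathrm{d}A_v ,
\]
% where the derivative process alpha_t(v) is the object the paper
% computes from the defining stochastic differential equations.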
Hidden conditional random field with distribution constraints for phone classification
We advance the recently proposed hidden conditional random field (HCRF) model by replacing the moment constraints (MCs) with distribution constraints (DCs). We point out that the distribution constraints are the same as the traditional moment constraints for binary features but are able to better regularize the probability distribution of continuous-valued features than the moment co...
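As a hedged illustration, in LaTeX, of how a distribution constraint (DC) differs from a moment constraint (MC); the notation and the quantization of continuous feature values into bins are our assumptions, not the paper's exact formulation.

% An MC fixes only the expectation of a feature f_k, while a DC fixes
% the probability of each (quantized) value v the feature can take:
\[
  \text{MC:}\ \ \mathbb{E}_{\tilde p}[f_k] = \mathbb{E}_{p}[f_k],
  \qquad
  \text{DC:}\ \ \tilde P(f_k = v) = P(f_k = v) \ \text{for every bin } v .
\]
% For a binary feature, P(f_k = 1) = E[f_k], so the two constraint
% types coincide, which is the point made in the abstract.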
Soft-DTW: a Differentiable Loss Function for Time-Series
We propose in this paper a differentiable learning loss between time series, building upon the celebrated dynamic time warping (DTW) discrepancy. Unlike the Euclidean distance, DTW can compare time series of variable size and is robust to shifts or dilatations across the time dimension. To compute DTW, one typically solves a minimal-cost alignment problem between two time series using dynamic p...
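Since the abstract describes DTW as a minimal-cost alignment computed by dynamic programming, here is a minimal Python sketch of that recursion together with a smoothed soft-min, the standard device that makes the resulting loss differentiable; the squared-distance cost and the gamma parameter follow common convention and are not the paper's reference implementation.

import numpy as np

def softmin(a, b, c, gamma):
    # Smoothed minimum -gamma * log(sum(exp(-x / gamma)));
    # recovers the hard min as gamma -> 0.
    vals = np.array([a, b, c]) / -gamma
    m = vals.max()
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    # Soft-DTW discrepancy between 1-D series x and y (lengths may
    # differ), via the classical DTW dynamic program with softmin.
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2  # pairwise alignment cost
            R[i, j] = cost + softmin(R[i - 1, j], R[i, j - 1],
                                     R[i - 1, j - 1], gamma)
    return R[n, m]

# Series of different lengths are fine, unlike with the Euclidean distance.
print(soft_dtw(np.array([0.0, 1.0, 2.0]), np.array([0.0, 0.5, 1.0, 2.0])))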
A Rough Differentiable Function
A real-valued continuously differentiable function f on the unit interval is constructed such that ∑_{k=1}^∞ β_f(x, 2^{−k}) = ∞ holds for every x ∈ [0, 1]. Here β_f(x, 2^{−k}) measures the distance of f to the best approximating linear function at scale 2^{−k} around x.
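For reference, a hedged LaTeX reconstruction of one standard (Jones-type) normalization of the β numbers used above; the paper's exact definition may differ in the choice of normalizing factor.

% beta_f(x, r) measures, at scale r around x, how far f is from the
% best approximating affine function (our normalization):
\[
  \beta_f(x, r) \;=\; \inf_{L\ \text{affine}} \ \sup_{|y - x| \le r}
  \frac{|f(y) - L(y)|}{r},
\]
% so the construction yields an f in C^1([0,1]) whose beta numbers
% fail to be summable at every single point x.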
Hyphenation with Conditional Random Field
In this project, we approach the problem of English-word hyphenation using a linear-chain conditional random field model. We measure the effectiveness of different feature combinations and of two learning methods: the Collins perceptron and stochastic gradient following. We achieve an accuracy of 77.95% using stochastic gradient descent.
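To illustrate the linear-chain setup, here is a minimal Python sketch of Viterbi decoding for hyphenation cast as per-letter binary tagging (tag 1 meaning "a hyphen may follow this letter"); the scores and weights are invented for illustration and are not the project's learned features.

import numpy as np

def viterbi(emission, transition):
    # emission: (n_positions, n_tags) scores; transition: (n_tags, n_tags).
    # Returns the highest-scoring tag sequence of a linear-chain model.
    n, k = emission.shape
    score = emission[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        # cand[prev, curr] = score so far + transition + emission at t
        cand = score[:, None] + transition + emission[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(n - 1, 0, -1):  # backtrack from the last position
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]

# Invented scores for a 6-letter word and 2 tags (no-hyphen / hyphen).
rng = np.random.default_rng(0)
emission = rng.normal(size=(6, 2))
transition = np.array([[0.1, -0.2], [-0.5, -1.0]])  # discourage 1 after 1
print(viterbi(emission, transition))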
Journal

Journal title: Theory of Probability & Its Applications
Year: 2016
ISSN: 0040-585X, 1095-7219
DOI: 10.1137/s0040585x97t987909